Vandeput's Data Science for Supply Chain Forecasting (book excerpt)

I am gratified to see the continuing adoption of Forecast Value Added by organizations worldwide. FVA is an easy-to-understand and easy-to-apply approach for identifying bad practices in your forecasting process. And I'm particularly gratified to see coverage of FVA in two new books, which the authors are graciously allowing The BFD to excerpt. We'll do the second book upon its release next month. For today, I want to thank author Nicolas Vandeput for sharing parts of his latest book, Data Science for Supply Chain Forecasting (De Gruyter, 2021).

Vandeput's book is based on a compelling premise: Using data science to solve a problem requires a scientific mindset more than coding skills. I share this viewpoint. FVA is essentially the application of the scientific method to the forecasting process, starting with a null hypothesis,

H0: The forecasting process has no effect

As a forecaster / data scientist, your aim is to determine whether H0 can be rejected, thereby concluding that your forecasting process does have an effect. The effect can be either positive (the process improves accuracy) or negative (the process is making the forecast worse). Of course, if you cannot reject the null hypothesis that your forecasting process has no effect, is the process even worth executing?
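As a concrete (if simplified) illustration, you could compare per-period absolute errors of the process forecast against those of a naive benchmark, as sketched below. The data are invented, and the choice of a paired t-test is my assumption — one reasonable way among several to put H0 to the test:

```python
# A minimal sketch of testing H0 ("the forecasting process has no
# effect") with a paired t-test on per-period absolute errors.
# All numbers here are illustrative.
import numpy as np
from scipy import stats

naive_abs_err   = np.array([120, 95, 140, 110, 130, 105, 150, 90, 125, 115])
process_abs_err = np.array([100, 90, 120, 105, 110, 100, 130, 85, 115, 105])

t_stat, p_value = stats.ttest_rel(process_abs_err, naive_abs_err)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value lets you reject H0; the sign of the mean difference
# tells you whether the process helps (negative) or hurts (positive).
print("mean error difference:", (process_abs_err - naive_abs_err).mean())
```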

Find more discussion of this topic in Changing the Paradigm for Business Forecasting (Part 8 of 12).

Data Science for Supply Chain Forecasting

Vandeput published the first edition of this book in 2018, with extensive coverage of traditional statistical / time series forecasting methods as well as the more recently popular machine learning methods. It included do-it-yourself sections illustrating the implementation of these methods in Python (and in Excel for the statistical models).

This second edition begins with a new Foreword by Spyros Makridakis. It includes considerable "how to" material on statistical and machine learning forecasting methods, along with much new content: an introduction to neural networks and an all-new Part III on demand forecasting process management. This new Part III features an entire chapter on Forecast Value Added, from which we share the following excerpt:

What Is a Good Forecast Error?

Throughout this book, we created forecasting models: first statistical models, then machine learning models. Usually, you will try these models against your own dataset, starting with a simple model and then moving on to more complex ones. As soon as you get the first results from your model, you will ask yourself: are these results good? How do you know if an MAE of 20% is good? What about an MAE of 70%?

The accuracy of any model depends on the demand's inner complexity and random nature. Forecasting monthly car sales in Norway is much easier than predicting the sales of a specific smartphone, in a particular shop, on one specific day. It is exceedingly difficult to estimate what a good forecast error is for a particular dataset without using a benchmark.

Benchmarking

To know whether a certain level of accuracy is good or bad, you must compare it against a benchmark. As a forecast benchmark, we will use a naive model. We will then compare any forecast against this naive forecast and see how much extra accuracy (or error reduction) our forecasting model delivers over it.
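As an illustration, here is a minimal sketch of such a comparison in Python, using the last observed value as the naive forecast (the data and names are illustrative; the book's own benchmarks and implementations may differ):

```python
# A minimal sketch of benchmarking a forecast model against a naive
# forecast (next period = current period's demand). Data are illustrative.
import numpy as np

demand = np.array([100, 110, 105, 120, 115, 130, 125, 140])
model_forecast = np.array([102, 108, 110, 116, 118, 126, 128, 136])

naive_forecast = demand[:-1]      # naive prediction for periods 2..n
actuals = demand[1:]
model_fc = model_forecast[1:]

def mae_pct(forecast, actual):
    """Mean absolute error, scaled by average demand."""
    return np.abs(forecast - actual).mean() / actual.mean()

naive_mae = mae_pct(naive_forecast, actuals)
model_mae = mae_pct(model_fc, actuals)
print(f"Naive MAE: {naive_mae:.1%}  Model MAE: {model_mae:.1%}")
# FVA: how much error reduction the model delivers over the naive benchmark.
print(f"FVA vs. naive: {naive_mae - model_mae:.1%}")
```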

...

Process and Forecast Value Added

Forecast value added is not only meant to measure the added value of a model compared to a (naive) benchmark, but also to track the efficiency of each step in the forecasting process. By performing FVA analysis of the whole forecasting process, you will be able to highlight steps that are efficient (i.e., reducing the forecast error without consuming too much time) and steps that consume resources without bringing any extra accuracy. FVA is the key to process excellence. For each team (or stakeholder) working on the forecasting process, you will need to track two aspects:

    • their FVA compared to the previous team in the forecasting process flow
    • their time spent working on the forecast

Those teams can be demand planners, the sales team, senior management, and so on. (You can even track the FVA of each individual separately.) Because you track the FVA and time spent at each step in the process, you will have the right tools to reduce most judgmental biases, whether intentional or unintentional.
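To make the stepwise tracking concrete, here is a minimal sketch in Python (the step names, data, and choice of MAE as the metric are illustrative assumptions; in practice you would log time spent alongside):

```python
# A minimal sketch of tracking FVA at each step of the forecasting
# process. All data and step names are illustrative.
import numpy as np

actuals = np.array([100, 120, 110, 130, 125, 140])
steps = {  # forecasts produced at each successive step of the process
    "naive":      np.array([ 95, 100, 120, 110, 130, 125]),
    "stat model": np.array([105, 115, 112, 124, 128, 135]),
    "planners":   np.array([102, 118, 111, 127, 126, 138]),
    "management": np.array([110, 125, 115, 135, 130, 145]),
}

def mae(forecast):
    """Mean absolute error against the actual demand."""
    return float(np.abs(forecast - actuals).mean())

prev_err = None
for step, forecast in steps.items():
    err = mae(forecast)
    # FVA of this step = error reduction versus the previous step's forecast.
    fva = "" if prev_err is None else f"  FVA vs. previous step: {prev_err - err:+.1f}"
    print(f"{step:<11} MAE: {err:5.1f}{fva}")
    prev_err = err
```

Note that in this invented example the final review step increases the error — exactly the kind of value-destroying touch point that FVA analysis is designed to expose.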

Best Practices

Let's review the best practices when using the forecast value added framework.

    • FVA process analysis should be performed over multiple forecast cycles; anyone can be (un)lucky from time to time.
    • If you want to push FVA further and focus on the most critical items, use it together with weighted KPIs (see the sketch after this list). Demand planners should focus on the SKUs for which the forecast model produced the highest weighted error over the last periods, or on the items for which they expect the model to lack insight.
    • If you are in a supply chain with different sales channels (or business units) that involve different information, teams, and buying behaviors, track FVA separately for each of them.
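To make the weighted-KPI idea concrete, here is a minimal sketch of ranking SKUs by a volume-weighted error (the data, SKU names, and the choice of volume as the weight are illustrative assumptions, not from the book):

```python
# A minimal sketch of a volume-weighted error for prioritizing SKUs.
# Assumes per-SKU mean absolute errors and demand volumes are available;
# all numbers and names here are illustrative.
import numpy as np

skus = ["A", "B", "C"]
abs_error = np.array([10.0, 4.0, 25.0])  # per-SKU mean absolute error
volume = np.array([500.0, 50.0, 20.0])   # per-SKU demand volume (the weight)

weighted_error = abs_error * volume
# Review SKUs in decreasing order of weighted error, so planners spend
# their time where the model's mistakes matter most.
for i in np.argsort(weighted_error)[::-1]:
    print(f"SKU {skus[i]}: weighted error = {weighted_error[i]:,.0f}")
```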

As a final piece of advice, we can review the conclusions drawn by Fildes and Goodwin (2007), who studied the forecast adjustment process at four British companies. They saw that planners were making many small adjustments to the forecasts that added nearly no value while consuming time. They also noted that larger adjustments were more likely to improve accuracy, likely because such adjustments require more justification to senior management and carry higher (personal) risk if they are wrong. Finally, they saw that planners tend to be overly optimistic (a common cognitive bias), resulting in too many positive adjustments. This is such an issue that, in order to improve forecast value added, the authors provocatively suggest banning positive adjustments.

Process Efficiency

With the help of FVA, you will quickly realize that the marginal improvement from each new team working on the forecast decreases. It might be easy to improve the most significant shortcomings of a forecast model, but it is much more challenging to improve a forecast that has already been reviewed by a few professional teams relying on multiple sources of information. That is fine, as FVA is there to help businesses allocate the appropriate resources to forecast editing. Past a certain point, working more on the forecast will not be worth it.

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
